24 research outputs found

    Oceanic Games: Centralization Risks and Incentives in Blockchain Mining

    To participate in the distributed consensus of permissionless blockchains, prospective nodes -- or miners -- provide proof of designated, costly resources. However, in contrast to the intended decentralization, current data on blockchain mining unveils increased concentration of these resources in a few major entities, typically mining pools. To study strategic considerations in this setting, we employ the concept of Oceanic Games (Milnor and Shapley, 1978). Oceanic Games have been used to analyze decision making in corporate settings with small numbers of dominant players (shareholders) and large numbers of individually insignificant players, the ocean. Unlike standard equilibrium models, they focus on measuring the value (or power) per entity and per unit of resource in a given distribution of resources. These values are viewed as strategic components in coalition formations, mergers and resource acquisitions. Considering such issues relevant to blockchain governance and long-term sustainability, we adapt oceanic games to blockchain mining and illustrate the defined concepts via examples. The application of existing results reveals incentives for individual miners to merge in order to increase the value of their resources. This offers an alternative perspective on the observed centralization and concentration of mining power. Beyond numerical simulations, we use the model to identify issues relevant to the design of future cryptocurrencies and formulate prospective research questions. Comment: [Best Paper Award] at the International Conference on Mathematical Research for Blockchain Economy (MARBLE 2019).
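
    The value-per-entity perspective can be made concrete with a small simulation. The sketch below is a hedged illustration with made-up hash-rate shares, not a computation from the paper: it estimates the Shapley-Shubik power of a few dominant miners facing an ocean of tiny miners by sampling random orderings and recording which player is pivotal for a majority coalition.

        import random

        def estimate_power(atoms, ocean_total, ocean_units=1_000, samples=5_000, quota=0.5):
            """Monte Carlo estimate of the Shapley-Shubik power of large miners
            ("atoms") facing an ocean of individually insignificant miners.

            atoms       -- hash-rate shares of the dominant miners
            ocean_total -- combined share of the ocean, split into ocean_units drops
            """
            drop = ocean_total / ocean_units
            players = [("atom", i, w) for i, w in enumerate(atoms)]
            players += [("ocean", None, drop)] * ocean_units
            threshold = quota * sum(w for _, _, w in players)
            pivots = [0] * len(atoms)
            for _ in range(samples):
                random.shuffle(players)
                acc = 0.0
                for kind, idx, w in players:
                    acc += w
                    if acc > threshold:      # this player tips the coalition over the quota
                        if kind == "atom":
                            pivots[idx] += 1
                        break
            return [p / samples for p in pivots]

        # Three dominant pools at 20%, 15% and 10%; the ocean holds the remaining 55%.
        print(estimate_power([0.20, 0.15, 0.10], ocean_total=0.55))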

    Stable Matching with Evolving Preferences

    We consider the problem of stable matching with dynamic preference lists. At each time step, the preference list of some player may change by swapping random adjacent members. The goal of a central agency (algorithm) is to maintain an approximately stable matching (in terms of number of blocking pairs) at all times. The changes in the preference lists are not reported to the algorithm, but must instead be probed explicitly by the algorithm. We design an algorithm that in expectation and with high probability maintains a matching that has at most O((log n)^2) blocking pairs. Comment: 13 pages.
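
    For intuition, the quality measure above counts blocking pairs: pairs who strictly prefer each other to their assigned partners. A minimal sketch of that count, using the standard definition rather than the paper's probing algorithm:

        def blocking_pairs(matching, men_pref, women_pref):
            """Count blocking pairs (m, w): both prefer each other to current partners.

            matching    -- dict man -> woman (a perfect matching)
            men_pref    -- dict man -> list of women, most preferred first
            women_pref  -- dict woman -> list of men, most preferred first
            """
            partner_of_woman = {w: m for m, w in matching.items()}
            rank_w = {w: {m: i for i, m in enumerate(prefs)}
                      for w, prefs in women_pref.items()}
            count = 0
            for m, prefs in men_pref.items():
                current = prefs.index(matching[m])
                for w in prefs[:current]:      # women m strictly prefers to his partner
                    if rank_w[w][m] < rank_w[w][partner_of_woman[w]]:
                        count += 1
            return count

        men = {"a": ["x", "y"], "b": ["x", "y"]}
        women = {"x": ["a", "b"], "y": ["a", "b"]}
        print(blocking_pairs({"a": "y", "b": "x"}, men, women))  # 1: (a, x) blocks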

    Lower bounds for uniform read-once threshold formulae in the randomized decision tree model

    We investigate the randomized decision tree complexity of a specific class of read-once threshold functions. A read-once threshold formula can be defined by a rooted tree, every internal node of which is labeled by a threshold function T_k^n (with output 1 only when at least k out of n input bits are 1) and each leaf by a distinct variable. Such a tree defines a Boolean function in a natural way. We focus on the randomized decision tree complexity of such functions, when the underlying tree is a uniform tree with all its internal nodes labeled by the same threshold function. We prove lower bounds of the form c(k,n)^d, where d is the depth of the tree. We also treat trees with alternating levels of AND and OR gates separately and show asymptotically optimal bounds, extending the known bounds for the binary case.
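
    For intuition about the objects being analyzed, the sketch below (illustrative only; the paper's contribution is the lower bound, not an algorithm) evaluates a uniform read-once T_k^n tree with the natural randomized strategy: recurse into the children in random order, stop as soon as the gate's output is determined, and count how many leaves are probed.

        import random

        def eval_threshold_tree(leaves, n, k, depth, counter):
            """Evaluate a uniform read-once T_k^n tree over `leaves` (0/1 list).

            Children are visited in random order with early stopping: we stop as
            soon as k ones (output 1) or n-k+1 zeros (output 0) have been seen.
            `counter` accumulates the number of leaves actually probed.
            """
            if depth == 0:
                counter[0] += 1
                return leaves[0]
            size = len(leaves) // n
            children = [leaves[i * size:(i + 1) * size] for i in range(n)]
            random.shuffle(children)
            ones = zeros = 0
            for child in children:
                if eval_threshold_tree(child, n, k, depth - 1, counter):
                    ones += 1
                else:
                    zeros += 1
                if ones >= k:
                    return 1
                if zeros >= n - k + 1:
                    return 0

        # Depth-2 tree of T_2^3 gates over 9 random input bits.
        n, k, d = 3, 2, 2
        bits = [random.randint(0, 1) for _ in range(n ** d)]
        probes = [0]
        print(eval_threshold_tree(bits, n, k, d, probes), "leaves probed:", probes[0])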

    Environmental effects on the growth and biochemical composition of four microalgae, in relation to their use as food for Mytilus edulis larval rearing.

    Environmental conditions in the form of light intensity and phosphorus and nitrogen limitation were used to manipulate the biochemical composition of continuous cultures of Skeletonema costatum, Chaetoceros muelleri, Rhinomonas reticulata and Pavlova lutheri. Crude protein, carbohydrate and chlorophyll content, as well as the fatty acid profile, were determined for the combinations of two light intensities (high and low light, HL and LL) and three nutrient conditions (no nutrient limitation, f/2; phosphorus limitation, P; and nitrogen limitation, N). The cultures were fed to Mytilus edulis larvae over a two-week period and larval size and mortality were assessed; the larval fatty acid profile of various batches of eggs, as well as at the end of the feeding trial, was also determined. A novel computer-aided image analysis technique was used to measure the length of the larvae. All monospecific diets supported good growth, sometimes equal or superior to a control diet consisting of a mixture of species (R. reticulata and P. lutheri). In general, survival was not affected by the diets and was found to be related more to the specific batch of larvae used; growth, in contrast, was correlated with the diet. The S. costatum diets were ranked: LL N = LL f/2 = LL P = HL N > Control > HL f/2 = HL P. The C. muelleri diets were ranked: LL N = LL f/2 > Control > HL f/2 = HL P = HL N > LL P. The R. reticulata diets were ranked, again in decreasing order of quality: HL N = LL f/2 = Control > LL N > HL f/2 = HL P = LL P. The P. lutheri ranking was: HL N = HL f/2 = HL P = LL P > LL N = LL f/2 > Control. The larvae were analyzed for their fatty acid profile and relative content, and some fatty acids were significantly correlated with growth, enabling the use of certain fatty acids as an index of growth for M. edulis larvae. Larval 20:5(n-3) and polyunsaturated fatty acids (PUFA) were a positive index of growth, while 15:0 and saturated fatty acids (SaFA) were a negative index. A multidimensional model was used in an effort to correlate algal biochemical components with larval growth. Some fatty acids were found to be the main factors determining algal biochemical composition, with protein and carbohydrate playing a secondary "modifying" role. In the case of P. lutheri, 16:0 and SaFA were positively correlated with larval growth in an almost linear fashion, while omega-3 fatty acids were negatively correlated with larval growth. A positive correlation for 16:0 and a negative one for PUFA was also established in S. costatum and R. reticulata.

    The Bitcoin Backbone Protocol: Analysis and Applications

    Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove two of its fundamental properties, which we call common prefix and chain quality, in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the hashing power of the adversary relative to network synchronicity; we show our results to be tight under high synchronization. Next, we propose and analyze applications that can be built on top of the backbone protocol, specifically focusing on Byzantine agreement (BA) and on the notion of a public transaction ledger. Regarding BA, we observe that Nakamoto's suggestion falls short of solving it, and present a simple alternative which works assuming that the adversary's hashing power is bounded by 1/3. The public transaction ledger captures the essence of Bitcoin's operation as a cryptocurrency, in the sense that it guarantees the liveness and persistence of committed transactions. Based on this notion, we describe and analyze the Bitcoin system as well as a more elaborate BA protocol, proving them secure assuming high network synchronicity and that the adversary's hashing power is strictly less than 1/2, while the adversarial bound needed for security decreases as the network desynchronizes. Finally, we show that our analysis of the Bitcoin backbone protocol for synchronous networks extends with relative ease to the recently considered partially synchronous model, where there is an upper bound on the delay of messages that is unknown to the honest parties.
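
    The common prefix property has a simple operational reading: after pruning the last k blocks from one honest party's chain, the result should be a prefix of any other honest party's chain. A minimal sketch of that check, representing chains as lists of block identifiers (an assumption made here for illustration):

        def common_prefix_holds(chain_a, chain_b, k):
            """Check the common prefix property for two chains of block ids:
            chain_a with its last k blocks pruned must be a prefix of chain_b,
            and vice versa (so the order of the arguments does not matter)."""
            def is_prefix(shorter, longer):
                return longer[:len(shorter)] == shorter
            return (is_prefix(chain_a[:len(chain_a) - k], chain_b) and
                    is_prefix(chain_b[:len(chain_b) - k], chain_a))

        # Two honest views that agree except for the most recent block.
        a = ["g", "b1", "b2", "b3"]
        b = ["g", "b1", "b2", "b4"]
        print(common_prefix_holds(a, b, k=1))  # True
        print(common_prefix_holds(a, b, k=0))  # False: the views diverge at the tip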

    Ordering Transactions with Bounded Unfairness: Definitions, Complexity and Constructions

    An important consideration in the context of distributed ledger protocols is fairness in terms of transaction ordering. Recent work [Crypto 2020] revealed a deep connection of (receiver) order fairness to social choice theory and related impossibility results arising from the Condorcet paradox. As a result of the impossibility, various relaxations of order fairness were investigated in prior works. Given that distributed ledger protocols, especially those processing smart contracts, must serialize the input transactions, a natural objective is to minimize the distance (in terms of the number of injected transactions) between any pair of unfairly ordered transactions in the output ledger, a concept we call bounded unfairness. In state machine replication (SMR) parlance, this asks for minimizing the number of unfair state updates occurring before the processing of any transaction. This unfairness minimization objective gives rise to a natural class of parametric order fairness definitions that has not been studied before. As we observe, previous realizable relaxations of order fairness do not yield good unfairness bounds. Achieving optimal order fairness in the sense of bounded unfairness turns out to be connected to the graph-theoretic properties of the underlying transaction dependency graph, and specifically to the bandwidth metric of strongly connected components in this graph. This gives rise to a specific instance of the definition that we call "directed bandwidth order-fairness", which we show captures the best that any protocol can achieve in terms of bounding unfairness. We prove that ordering transactions in this fashion is NP-hard and non-approximable for any constant ratio. Towards realizing the property, we put forth a new distributed ledger protocol called Taxis that achieves directed bandwidth order-fairness in the permissionless setting. We present two variants of our protocol, one that matches the property perfectly but (necessarily) lacks in performance and liveness, and a second variant that achieves liveness and better complexity while offering a slightly relaxed version of the directed bandwidth definition. Finally, we comment on applications of our work to social choice theory, a direction which we believe to be of independent interest.
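
    The bandwidth notion referenced above can be illustrated on a tiny dependency graph. The sketch below uses a hedged reading of the metric (not necessarily the paper's exact definition): for a linear ordering of the vertices, measure the worst stretch of an edge placed against its direction, and minimize over orderings. It is brute force and exponential, for building intuition only; the paper proves the general problem NP-hard.

        from itertools import permutations

        def directed_bandwidth(vertices, edges):
            """Brute-force a directed bandwidth for a small digraph: over all
            orderings, minimize the worst backward stretch pos(u) - pos(v) of
            an edge (u, v) placed against its direction."""
            best = None
            for order in permutations(vertices):
                pos = {v: i for i, v in enumerate(order)}
                worst = max((pos[u] - pos[v] for u, v in edges if pos[u] > pos[v]),
                            default=0)
                if best is None or worst < best:
                    best = worst
            return best

        # A 3-cycle is strongly connected, so some edge must point backwards.
        print(directed_bandwidth("abc", [("a", "b"), ("b", "c"), ("c", "a")]))  # 1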

    The Bitcoin Backbone Protocol with Chains of Variable Difficulty

    Bitcoin’s innovative and distributedly maintained blockchain data structure hinges on the adequate degree of difficulty of so-called “proofs of work,” which miners have to produce in order for transactions to be inserted. Importantly, these proofs of work have to be hard enough so that miners have an opportunity to unify their views in the presence of an adversary who interferes but has bounded computational power, yet easy enough to be solvable regularly and enable the miners to make progress. As such, as the miners’ population evolves over time, so should the difficulty of these proofs. Bitcoin provides such an adjustment mechanism, with empirical evidence of a constant block generation rate against such population changes. In this paper we provide the first (to our knowledge) formal analysis of Bitcoin’s target (re)calculation function in the cryptographic setting, i.e., against all possible adversaries aiming to subvert the protocol’s properties. We extend the q-bounded synchronous model of the Bitcoin backbone protocol [Eurocrypt 2015], which put forth the basic properties of Bitcoin’s underlying blockchain data structure and showed how a robust public transaction ledger can be built on top of them, to environments that may introduce or suspend parties in each round. We provide a set of necessary conditions with respect to the way the population evolves under which the “Bitcoin backbone with chains of variable difficulty” provides a robust transaction ledger in the presence of an actively malicious adversary controlling a fraction of the miners strictly below 50% at each instant of the execution. Our work introduces new analysis techniques and tools to the area of blockchain systems that may prove useful in analyzing other blockchain protocols.
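
    For concreteness, Bitcoin's deployed recalculation rule, which the paper analyzes in abstracted form, rescales the target every 2016 blocks by the ratio of the observed epoch duration to the expected one, clamping the ratio to a factor of 4 in either direction. A simplified sketch (it omits details such as Bitcoin's use of only the first and last block timestamps of the epoch):

        EPOCH_LENGTH = 2016           # blocks per difficulty epoch
        TARGET_SPACING = 600          # intended seconds per block
        EXPECTED_TIMESPAN = EPOCH_LENGTH * TARGET_SPACING
        DAMPENING = 4                 # Bitcoin clamps the adjustment to a factor of 4

        def recalculate_target(old_target, epoch_seconds):
            """Rescale the PoW target by how fast the last epoch was mined.
            A larger target is easier; blocks arriving too fast shrink it."""
            clamped = min(max(epoch_seconds, EXPECTED_TIMESPAN // DAMPENING),
                          EXPECTED_TIMESPAN * DAMPENING)
            return old_target * clamped // EXPECTED_TIMESPAN

        # An epoch mined twice as fast as intended halves the target (doubles difficulty).
        print(recalculate_target(1 << 220, EXPECTED_TIMESPAN // 2) == (1 << 220) // 2)  # True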

    How Does Nakamoto Set His Clock? Full Analysis of Nakamoto Consensus in Bounded-Delay Networks

    Nakamoto consensus, arguably the most exciting development in distributed computing in the last few years, is in a sense a recasting of the traditional state-machine-replication problem in an unauthenticated setting, where furthermore parties come and go without warning. The protocol relies on a cryptographic primitive known as proof of work (PoW), which is used to throttle message passing. Importantly, the PoW difficulty level is appropriately adjusted throughout the course of the protocol execution, relying on the blockchain’s timekeeping ability. While the original formulation was only accompanied by rudimentary analysis, significant and steady progress has been made in abstracting the protocol’s properties and providing a formal analysis under various restrictions and protocol simplifications. Still, a full analysis of the protocol has remained open: one that includes its target recalculation and, notably, its timestamp adjustment mechanism (the protocol allows incoming block timestamps in the near future, as determined by a protocol parameter, and rejects blocks whose timestamp precedes the median time of a specific number of on-chain blocks, namely 11), the features that equip it to operate in its intended setting of bounded communication delays, imperfect clocks and dynamic participation. The gap is that Nakamoto’s protocol fundamentally depends on the blockchain itself being a consistent timekeeper that advances roughly on par with real time. In order to tackle this question, we introduce a new analytical tool that we call hot-hand executions, which captures the regular occurrence of high concentrations of honestly generated blocks, and correspondingly put forth and prove a new blockchain property called concentrated chain quality, which may be of independent interest. Utilizing these tools and techniques, we demonstrate that Nakamoto’s protocol achieves, under suitable conditions, safety and liveness as well as (consistent) timekeeping.
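
    The two timestamp rules described above correspond to Bitcoin's deployed validity checks: a block's timestamp must exceed the median timestamp of the previous 11 blocks (the "median time past") and must not lie more than a bounded interval (2 hours in Bitcoin) beyond the verifier's local time. A minimal sketch:

        from statistics import median

        MEDIAN_WINDOW = 11            # blocks used for the median-time-past rule
        MAX_FUTURE_DRIFT = 2 * 3600   # Bitcoin allows timestamps up to 2 hours ahead

        def timestamp_valid(block_time, chain_times, local_time):
            """Check a block timestamp against the chain and the verifier's clock.

            chain_times -- timestamps of the existing chain, oldest first
            """
            past_ok = block_time > median(chain_times[-MEDIAN_WINDOW:])
            future_ok = block_time <= local_time + MAX_FUTURE_DRIFT
            return past_ok and future_ok

        chain = [600 * i for i in range(20)]          # one block every 10 minutes
        print(timestamp_valid(600 * 20, chain, local_time=600 * 20))  # True
        print(timestamp_valid(600 * 8, chain, local_time=600 * 20))   # False: before median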